
    SMIL State: an architecture and implementation for adaptive time-based web applications

    In this paper we examine adaptive time-based web applications (or presentations). These are interactive presentations where time dictates which parts of the application are presented (providing the major structuring paradigm), and that require interactivity and other dynamic adaptation. We investigate the current technologies available to create such presentations and their shortcomings, and suggest a mechanism for addressing these shortcomings. This mechanism, SMIL State, can be used to add user-defined state to declarative time-based languages such as SMIL or SVG animation, thereby enabling the author to create control flows that are difficult to realize within the temporal containment model of the host languages. In addition, SMIL State can be used as a bridging mechanism between languages, enabling easy integration of external components into the web application. Finally, SMIL State enables richer expressions for content control. This paper defines SMIL State in terms of an introductory example, followed by a detailed specification of the State model. Next, the implementation of this model is discussed. We conclude with a set of potential use cases, including dynamic content adaptation and delayed insertion of custom content such as advertisements. © 2009 Springer Science+Business Media, LLC
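
    The control-flow idea behind SMIL State can be illustrated with a minimal sketch: author-defined state variables gate which parts of a timed presentation are rendered, and expressions over that state drive content control. The Python API below is purely illustrative; real SMIL State is declarative XML markup with expression attributes, and none of these class or variable names come from the paper.

```python
# Illustrative sketch of the SMIL State idea: user-defined state variables
# gate which media items of a presentation are rendered. All names here are
# hypothetical; actual SMIL State uses declarative markup, not a Python API.

class StateStore:
    """Holds author-defined state variables for a presentation."""
    def __init__(self, **initial):
        self.vars = dict(initial)

    def set(self, name, value):
        # Analogue of a declarative state-update operation.
        self.vars[name] = value

    def test(self, expr):
        # Analogue of an expression test attached to a media item.
        return eval(expr, {}, dict(self.vars))

def renderable(items, state):
    """Return only the media items whose gating expression is true."""
    return [name for name, expr in items if state.test(expr)]

state = StateStore(lang="en", ads_enabled=True)
items = [
    ("caption_en", "lang == 'en'"),
    ("caption_nl", "lang == 'nl'"),
    ("advert",     "ads_enabled"),       # delayed ad insertion, gated on state
]
print(renderable(items, state))          # English captions plus the advert
state.set("lang", "nl")                  # user-driven adaptation at runtime
print(renderable(items, state))          # Dutch captions plus the advert
```

    The point of the sketch is the separation of concerns the paper describes: the timing model stays in the host language, while control flow that is awkward to express through temporal containment lives in the state layer.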

    Non-Intrusive User Interfaces for Interactive Digital Television Experiences

    This paper presents a model and architecture for non-intrusive user interfaces in the interactive digital TV domain. The model is based on two concepts: non-monolithic rendering for content consumption and action descriptions for user interaction. In the first case, subsets of the multimedia content can be delivered to different rendering components (e.g., video to the TV screen and extra information to a handheld device). In the second case, we differentiate between actions, handlers, and activators. An action is the description of the user's intention, a handler implements that action, and an activator is the user interface of the action. Because we define actions instead of user interfaces, the implementation of the activators can take multiple forms: conventional user interfaces (using gestures or speech) and intelligent interfaces, in which the actions are derived from a set of parameters (e.g., the number of people in the room or the distance to the TV).
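
    The action/handler/activator separation described above can be sketched in a few lines: an action is just a name plus parameters, handlers are registered against action names, and activators are interchangeable front-ends that emit actions. All class and function names below are hypothetical illustrations, not the paper's implementation.

```python
# Sketch of the action/handler/activator split. An "action" names a user
# intention; a "handler" implements it; an "activator" is any front-end
# (button, speech, sensor-driven logic) that triggers it. Names are illustrative.
from typing import Callable, Dict

class ActionBus:
    """Routes named actions (user intentions) to their registered handlers."""
    def __init__(self):
        self.handlers: Dict[str, Callable[..., str]] = {}

    def register(self, action: str, handler: Callable[..., str]) -> None:
        self.handlers[action] = handler

    def trigger(self, action: str, **params) -> str:
        return self.handlers[action](**params)

bus = ActionBus()
bus.register("pause", lambda: "video paused")
bus.register("show_info", lambda device: f"extra info sent to {device}")

def remote_button_activator(bus):
    # Conventional activator: a physical button mapped to one action.
    return bus.trigger("pause")

def proximity_activator(bus, distance_m):
    # Intelligent activator: the action is derived from a parameter,
    # e.g. push companion content to a handheld when the viewer sits far away.
    if distance_m > 3:
        return bus.trigger("show_info", device="handheld")
    return "no action"

print(remote_button_activator(bus))
print(proximity_activator(bus, 4.5))
```

    Because both activators emit the same action vocabulary, swapping a remote control for a speech or sensor front-end requires no change to the handlers, which is the non-intrusiveness argument the abstract makes.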

    Using SMIL to Encode Interactive, Peer-Level Multimedia Annotations

    This paper discusses applying facilities in SMIL 2.0 to the problem of annotating multimedia presentations. Rather than viewing annotations as collections of (abstract) meta-information for use in indexing, retrieval, or semantic processing, we view annotations as a set of peer-level content with temporal and spatial relationships that are important in presenting a coherent story to a user. The composite nature of the collection of media is essential to the nature of peer-level annotations: a single media item viewed alone would typically be annotated quite differently from the same item in the context of a complete presentation. This paper focuses on the document engineering aspects of the annotation system. We do not consider any particular user interface for creating the annotations or any back-end storage architecture to save/search the annotations. Instead, we focus on how annotations can be represented within a common document architecture, and we consider means of providing document facilities that meet the requirements of our user model. We present our work in the context of a medical patient dossier example.

    Models, Media And Motion: Using The Web To Support Multimedia Documents

    This paper outlines the goals of the W3C's Synchronized Multimedia working group and presents an initial description of the first version of the proposed multimedia document model and format.

    Managing the Adaptive Processing of Distributed Multimedia Information

    Issues addressed in this document include the placement of information (involving the allocation of screen space and audio channels) and the relative ordering of information. Concretely, if both of the headline streams are allocated the same space on the screen, then only one stream should be selected for display. (The one selected would depend on the user or the document.) Alternatively, the two formatted text streams containing English and Dutch captions could be defined to allow both to be displayed at the same time if a user wished to do so. Note that if these captions have a content-based relationship to the video and/or audio streams, the relative presentation time of each stream also becomes important.
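
    The selection rule in this abstract (streams allocated to the same screen region are alternatives, streams in distinct regions may play together) can be sketched concretely. The data layout and function below are illustrative assumptions, not the paper's actual document format.

```python
# Sketch of region-based stream selection: within one screen region only one
# stream is shown (preferring the user's language), while streams in distinct
# regions coexist. Field names and the preference rule are illustrative.

def select_streams(streams, preferred_lang):
    """streams: list of (name, region, lang); returns the names to display."""
    chosen = {}  # region -> (name, lang); one winner per region
    for name, region, lang in streams:
        current = chosen.get(region)
        # A later stream displaces the current one only if it matches the
        # user's preferred language and the current one does not.
        if current is None or (lang == preferred_lang and current[1] != preferred_lang):
            chosen[region] = (name, lang)
    return [name for name, _ in chosen.values()]

streams = [
    ("headline_en", "top_banner",   "en"),
    ("headline_nl", "top_banner",   "nl"),  # same region: alternative, not additive
    ("captions_en", "caption_area", "en"),  # distinct region: can coexist
    ("video_main",  "main_area",    None),
]
print(select_streams(streams, "nl"))
```

    A fuller model would also honor the document's own defaults and the temporal alignment the abstract mentions; the sketch only captures the spatial-allocation constraint.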

    Measuring and manipulating audiences: A personal reflection

    Understanding the emotional reactions of audiences to a wide range of content types is an important area of research. In this article, I provide a personal reflection on various approaches to modeling, quantifying and understanding audience behavior based on a broad range of evaluation techniques. Using results from a study of the Heineken Weasel television commercial as a backdrop, I provide an overview of evaluation approaches and their impact in long-term and real-time evaluation. The main contribution is a personal reflection on audience evaluation based on multi-situation affinity with the area.

    Synchronization of Multi-Sourced Multimedia Data for Heterogeneous Target Systems

    Accessing multimedia information in a networked environment introduces problems to an application designer that don't exist when the same information is fetched locally. These problems include "competing" for the allocation of network resources across applications, synchronizing data arrivals from various sources within an application, and supporting multiple data representations across heterogeneous hosts. In this paper, we present a general framework for addressing these problems that is based on the assumption that time-sensitive data can only be controlled by having the application, the operating system(s) and a set of active, intelligent information objects coordinate their activities based on an explicit specification of resource, synchronization, and representation information. After presenting the general framework, we describe a document specification structure and two active system components that cooperatively provide support for synchronization and data-transformation problems in a networked multimedia environment.
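
    The intra-application synchronization problem named in this abstract (data arriving from several sources must be aligned before presentation) can be sketched with a small coordinator that releases a presentation unit only once every required source has delivered its data for a given time slot. This is an illustrative simplification under assumed names, not the paper's framework.

```python
# Sketch of multi-source synchronization: arrivals are buffered per time slot,
# and a slot is released for presentation only when all required sources have
# delivered. Class and field names are illustrative assumptions.
import collections

class SyncCoordinator:
    def __init__(self, required_sources):
        self.required = set(required_sources)
        self.pending = collections.defaultdict(dict)  # slot -> {source: data}

    def arrive(self, slot, source, data):
        """Record an arrival; return the complete slot when ready, else None."""
        self.pending[slot][source] = data
        if set(self.pending[slot]) == self.required:
            return self.pending.pop(slot)  # all streams aligned: present together
        return None

sync = SyncCoordinator(["video", "audio", "captions"])
print(sync.arrive(0, "video", "v0"))      # incomplete: nothing released yet
print(sync.arrive(0, "captions", "c0"))   # still waiting on audio
print(sync.arrive(0, "audio", "a0"))      # slot 0 complete, released as a unit
```

    A real framework would add the resource-allocation and representation-conversion concerns the paper couples with synchronization; the sketch isolates only the arrival-alignment step.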

    The Ambulant Annotator: Medical Multimedia Annotations on Tablet PC’s

    A new generation of tablet computers has stimulated end-user interest in annotating documents by adding pen-based commentary and spoken audio labels to otherwise static documents. The typical application scenario for most annotation systems is to convert existing content to a (virtual) image, capture annotation mark-up, and then save the annotations in a database. While this is often acceptable for text documents, most multimedia documents are time-sensitive and can be dynamic: content can change often depending on the types of audio/video data used. Our work looks at expanding the possibilities of annotation by integrating annotations onto time-based media. This paper discusses the AMBULANT Annotation Architecture. We describe requirements for multimedia annotations, the multimedia annotation architecture being developed at CWI, and initial experience from providing various classes of temporal and spatial annotation within the domain of medical documents.